Feature Equilibrium: An Adversarial Training Method to Improve Representation Learning
Authors
Abstract
Over-fitting is a significant threat to the integrity and reliability of deep neural networks with generous parameters. One problem is that the model learns more specific features than general ones during training. To solve this problem, we propose an adversarial method to assist in strengthening representation learning. In this method, we treat the classification network as a generator G and introduce an unsupervised discriminator D to distinguish hidden features from real images and limit their spatial distance. Notably, D will fall into the trap of perfect discrimination, so that the gradient of the confrontation loss becomes 0 after overtraining. To avoid this situation, we train D with probability $$P_{c}$$. Our proposed method is easy to incorporate into existing frameworks. It has been evaluated under various network architectures over datasets from different fields. Experiments show that, at low computational cost, it outperforms the benchmark by 1.5–2 points. For semantic segmentation on VOC, our method achieves 2.2 higher mAP.
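The abstract's key device is updating the discriminator D only with some probability $$P_{c}$$, so D never becomes a perfect classifier whose adversarial gradient collapses to 0. A minimal sketch of that stochastic gating, using a toy logistic-regression discriminator and synthetic data (the dimensions, learning rate, and P_C value are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
P_C = 0.5  # hypothetical update probability; the paper's P_c schedule is not given here

# Toy stand-ins: "real image" vectors and the classifier's hidden features.
real = rng.normal(loc=-1.0, scale=1.0, size=(64, 8))
features = rng.normal(loc=1.0, scale=1.0, size=(64, 8))

w = np.zeros(8)  # discriminator D: logistic regression over 8-D vectors

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def d_grad(w, real_batch, fake_batch):
    """Binary cross-entropy gradient: real -> label 1, fake -> label 0."""
    g_real = (sigmoid(real_batch @ w) - 1.0)[:, None] * real_batch
    g_fake = sigmoid(fake_batch @ w)[:, None] * fake_batch
    return g_real.mean(axis=0) + g_fake.mean(axis=0)

updates = 0
for step in range(1000):
    # Gate the discriminator update stochastically so D is not trained on
    # every step and cannot over-train into a perfect discriminator.
    if rng.random() < P_C:
        w -= 0.1 * d_grad(w, real, features)
        updates += 1
```

Only about P_C of the steps actually update D; in a full implementation the skipped steps would still update the generator G against the (imperfect) discriminator.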
Similar references
Adversarial Feature Learning
The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing generators learn to “linearize semantics” in the latent space of such models. Intuitively, such latent spaces may serve as useful feature representation...
A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks
Some recent works revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks where input examples are intentionally perturbed to fool DNNs. In this work, we revisit the DNN training process that includes adversarial examples into the training dataset so as to improve DNN’s resilience to adversarial attacks, namely, adversarial training. Our experiments show that d...
Controllable Invariance through Adversarial Feature Learning
Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning. In this paper, we tackle the problem of learning representations invariant to a specific factor or trait of data. The representation learning process is formulated as an adversarial minimax game. We analyze ...
Speaker-Invariant Training via Adversarial Learning
We propose a novel adversarial multi-task learning scheme, aiming at actively curtailing the inter-talker feature variability while maximizing its senone discriminability so as to enhance the performance of a deep neural network (DNN) based ASR system. We call the scheme speaker-invariant training (SIT). In SIT, a DNN acoustic model and a speaker classifier network are jointly optimized to mini...
Learning Privacy Preserving Encodings through Adversarial Training
We present a framework to learn privacy-preserving encodings of images (or other high-dimensional data) to inhibit inference of a chosen private attribute. Rather than encoding a fixed dataset or inhibiting a fixed estimator, we aim to learn an encoding function such that even after this function is fixed, an estimator with knowledge of the encoding is unable to learn to accurately predict the...
Journal
Journal title: International Journal of Computational Intelligence Systems
Year: 2023
ISSN: 1875-6883, 1875-6891
DOI: https://doi.org/10.1007/s44196-023-00229-2